Financial Fraud


When AI Agents Collude Online: Financial Fraud Risks by Collaborative LLM Agents on Social Platforms

Ren, Qibing, Zheng, Zhijie, Guo, Jiaxuan, Yan, Junchi, Ma, Lizhuang, Shao, Jing

arXiv.org Artificial Intelligence

In this work, we study the risks of collective financial fraud in large-scale multi-agent systems powered by large language model (LLM) agents. We investigate whether agents can collaborate in fraudulent behaviors, how such collaboration amplifies risks, and what factors influence fraud success. To support this research, we present MultiAgentFraudBench, a large-scale benchmark for simulating financial fraud scenarios based on realistic online interactions. The benchmark covers 28 typical online fraud scenarios, spanning the full fraud lifecycle across both public and private domains. We further analyze key factors affecting fraud success, including interaction depth, activity level, and fine-grained collaboration failure modes. Finally, we propose a series of mitigation strategies, including adding content-level warnings to fraudulent posts and dialogues, using LLMs as monitors to block potentially malicious agents, and fostering group resilience through information sharing at the societal level. Notably, we observe that malicious agents can adapt to environmental interventions. Our findings highlight the real-world risks of multi-agent financial fraud and suggest practical measures for mitigating them. Code is available at https://github.com/zheng977/MutiAgent4Fraud.


Confusion is the Final Barrier: Rethinking Jailbreak Evaluation and Investigating the Real Misuse Threat of LLMs

Yan, Yu, Sun, Sheng, Wang, Zhe, Lin, Yijun, Duan, Zenghao, Zheng, Zhifei, Liu, Min, Yin, Zhiyi, Zhang, Jianping

arXiv.org Artificial Intelligence

With the development of Large Language Models (LLMs), numerous efforts have revealed their vulnerabilities to jailbreak attacks. Although these studies have driven progress in LLMs' safety alignment, it remains unclear whether LLMs have internalized authentic knowledge to deal with real-world crimes, or are merely forced to simulate toxic language patterns. This ambiguity raises concerns that jailbreak success is often attributable to a hallucination loop between the jailbroken LLM and the judge LLM. By decoupling the use of jailbreak techniques, we construct knowledge-intensive Q&A to investigate the misuse threats of LLMs in terms of dangerous knowledge possession, harmful task planning utility, and harmfulness judgment robustness. Experiments reveal a mismatch between jailbreak success rates and harmful knowledge possession in LLMs, and existing LLM-as-a-judge frameworks tend to anchor harmfulness judgments on toxic language patterns. Our study reveals a gap between existing LLM safety assessments and real-world threat potential.


Fortifying Ethical Boundaries in AI: Advanced Strategies for Enhancing Security in Large Language Models

He, Yunhong, Qiu, Jianling, Zhang, Wei, Yuan, Zhengqing

arXiv.org Artificial Intelligence

Recent advancements in large language models (LLMs) have significantly enhanced capabilities in natural language processing and artificial intelligence. These models, including GPT-3.5 and LLaMA-2, have revolutionized text generation, translation, and question-answering tasks thanks to the transformative Transformer architecture. Despite their widespread use, LLMs present challenges such as ethical dilemmas when models are compelled to respond inappropriately, susceptibility to phishing attacks, and privacy violations. This paper addresses these challenges by introducing a multi-pronged approach that includes: 1) filtering sensitive vocabulary from user input to prevent unethical responses; 2) detecting role-playing to halt interactions that could lead to jailbreak scenarios; 3) implementing custom rule engines to restrict the generation of prohibited content; and 4) extending these methodologies to various LLM derivatives like Multi-Modal Large Language Models (MLLMs). Our approach not only fortifies models against unethical manipulations and privacy breaches but also maintains their high performance across tasks. We demonstrate state-of-the-art performance under various attack prompts, without compromising the model's core functionalities. Furthermore, the introduction of differentiated security levels empowers users to control their personal data disclosure. Our methods contribute to reducing social risks and conflicts arising from technological abuse, enhance data protection, and promote social equity. Collectively, this research provides a framework for balancing the efficiency of question-answering systems with user privacy and ethical standards, ensuring a safer user experience and fostering trust in AI technology.


How AI can transform transaction monitoring and prevent financial fraud

#artificialintelligence

Banks and fraudsters are engaged in a never-ending game of cat and mouse. On one side, fraudsters move money around to remove traces of criminality. On the other, banks are on the lookout for suspicious activity that indicates financial fraud. "Criminals put money through the financial system in a series of layers to mask its original source, getting to a point where that money is cleaned and can be used and integrated into the financial system for any kind of purchase or investment," says Livia Benisty, chief business officer and former global head of AML at Banking Circle, a payments bank that is pioneering the use of AI in AML. Money laundering regulations require banks and financial services to demonstrate methods for spotting this behaviour.


Hitting the Books: How Southeast Asia's largest bank uses AI to fight financial fraud

Engadget

Yes, robots are coming to take our jobs. That's a good thing; we should be happy they are, because those jobs they're taking kinda suck. Do you really want to go back to the days of manually monitoring, flagging and investigating the world's daily bank transfers in search of financial fraud and money laundering schemes? The bank has spent years developing a cutting-edge machine learning system that heavily automates the minutiae-stricken process of "transaction surveillance," freeing up human analysts to perform higher-level work while operating in delicate balance with the antique financial regulations that bind the industry. Working with AI by Thomas H. Davenport and Steven M. Miller is filled with similar case studies from myriad tech industries, looking at commonplace human-AI collaboration and providing insight into the potential implications of these interactions.


How Machine Learning Can Prevent Credit Card Fraud

#artificialintelligence

Machine learning can reduce false positives and detect credit card fraud more quickly. Using traditional methods to detect instances of credit card fraud slows down the process of resolving such issues. The application of machine learning in banking promises quicker and more accurate solutions for all kinds of financial institutions. The advent of digitization in banking has introduced several cybersecurity-related issues in such finance-based organizations. For example, reported financial fraud increased by 104% in the first quarter of 2020 compared to Q1 2019.
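One concrete lever behind "fewer false positives" is where the decision threshold on a model's fraud score is set. The sketch below is a minimal, hypothetical illustration in plain Python — the scores and labels are made up, not any bank's real data — showing how one might pick the lowest threshold whose false-positive rate stays within a budget:

```python
# Hypothetical sketch: choose a fraud-score threshold that caps the
# false-positive rate while still flagging as much fraud as possible.

def false_positive_rate(scores, labels, threshold):
    """Fraction of legitimate transactions (label 0) flagged at this threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

def pick_threshold(scores, labels, max_fpr=0.05):
    """Lowest threshold (i.e. catching the most fraud) whose FPR fits the budget."""
    for t in sorted(set(scores)):
        if false_positive_rate(scores, labels, t) <= max_fpr:
            return t
    return max(scores) + 1  # no threshold fits: flag nothing

# Toy model outputs: higher score = more fraud-like.
scores = [0.10, 0.20, 0.35, 0.40, 0.80, 0.90, 0.95]
labels = [0, 0, 0, 0, 1, 1, 1]  # 1 = confirmed fraud

t = pick_threshold(scores, labels, max_fpr=0.0)  # 0.8 on this toy data
flagged = [s >= t for s in scores]               # only the three fraud cases
```

In practice the scores would come from a trained classifier and the threshold would be tuned on a held-out validation set, but the trade-off this encodes — fraud caught versus legitimate customers inconvenienced — is the same.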


How Machine Learning Solutions are transforming the World of Financial Services? - BigStartups

#artificialintelligence

The Fintech sector has progressed beyond imagination. Just a few years ago, it took several weeks to get loans approved, but today everything is processed online and takes barely a day. Likewise, financial fraud used to occur frequently, and the financial safety of users was a big concern worldwide. In recent times, however, such fraudulent transactions have decreased considerably, even though online transactions have increased immensely. The mobile revolution and the emergence of trending technologies like machine learning have brought a paradigm shift in the fintech industry.


Finding Duplicate Invoices In-Flight with AI

#artificialintelligence

As with everything else in Coupa, AI has been thoughtfully applied to areas where it adds real value. One such area is financial fraud. Detecting financial fraud can be challenging, costly, and time-consuming for organizations. However, with Coupa's robust AI-powered fraud detection solution, Spend Guard, we are able to help customers catch fraud and errors in-flight, before invoices are even paid. Within Spend Guard, one of the many checks that our customers have found valuable is detecting duplicate invoices.
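A duplicate-invoice check of this kind often reduces to normalizing invoice fields into a key and grouping on it. The following is a hypothetical sketch in plain Python — the field names and normalization rules are assumptions for illustration, not Spend Guard's actual logic:

```python
import re
from collections import defaultdict

def invoice_key(inv):
    """Normalize an invoice so near-identical entries collide on one key:
    vendor (case- and whitespace-insensitive), amount to the cent, and the
    numeric part of the invoice number with leading zeros dropped."""
    digits = re.sub(r"\D", "", inv["number"])
    number = digits.lstrip("0") or "0"
    return (inv["vendor"].strip().lower(), round(inv["amount"], 2), number)

def find_duplicates(invoices):
    """Group invoices by normalized key; return id groups with >1 member."""
    groups = defaultdict(list)
    for inv in invoices:
        groups[invoice_key(inv)].append(inv["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

invoices = [
    {"id": 1, "vendor": "Acme Corp", "number": "INV-00123", "amount": 500.00},
    {"id": 2, "vendor": "acme corp", "number": "inv 123",   "amount": 500.00},
    {"id": 3, "vendor": "Acme Corp", "number": "INV-00124", "amount": 99.99},
]

dupes = find_duplicates(invoices)  # [[1, 2]]: entries 1 and 2 are the same invoice re-keyed
```

A production system would add fuzzier signals (dates, near-miss amounts, OCR variants), but exact matching on a normalized key is the usual first pass because it is cheap and has essentially no false positives.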


AI-Powered Decision Management Key for Global Credit Card Security

#artificialintelligence

While many fintech platforms focus on risk assessment, Brighterion has been solely dedicated to AI-powered decisioning for over 20 years. With a sharp focus on financial irregularities, Brighterion's AI decision-making algorithms provide real-time detection of financial fraud, credit risk, healthcare fraud, waste and abuse, and money laundering (the AML domain). The role of artificial intelligence is taking top billing in the search for software that detects fraud and credit risk. Legacy solutions like rules-based decisioning are hard-pressed to stay ahead of bad actors as fraud evolves and becomes more sophisticated. Machine learning rises to the top for its ability to learn from complex and widely varied data.


The Fintech Future: Accelerating the AI & ML Journey

#artificialintelligence

Artificial intelligence (AI) has assumed a growing influence within financial services in recent years, affecting areas such as credit decisions, risk management, fraud detection, and stress testing. For many fintechs it has been baked into the process from the outset, to the extent that the market for AI in fintech was valued at $6 billion in 2019 and is expected to reach $22 billion by 2025. Economic fallout from the pandemic, however, has accelerated the timetable for financial services firms to become mass adopters of AI and harness its predictive powers sooner rather than later. For digitally native fintechs, many of which have already embraced AI and its capabilities, this offers the opportunity to invest further in the technology and capitalise on the tools available to accelerate their journeys. Fintechs across the world are dealing with the effects of Covid-19 and face an uphill challenge in containing its impact on the financial system and the broader economy. With rising unemployment and stagnating economies, individuals and companies are struggling with debt, while the world in general is awash in credit risk.